This is an interactive notebook that you can also run locally.
Integrating with Weave: Production Dashboard
The GenAI tooling landscape is rapidly evolving - new frameworks, tools, and applications are emerging all the time. Weave aims to be a one-stop shop for all your GenAI monitoring and evaluation needs. This also means that it is sometimes necessary to integrate with existing platforms or extend Weave to fit the specific needs of your project or organization. In this cookbook, we’ll demonstrate how to leverage Weave’s powerful APIs and functions to create a custom production-monitoring dashboard as an extension of the Traces view in Weave. We’ll focus on:

- Fetching traces, costs, feedback, and other metrics from Weave
- Creating aggregate views for user feedback and cost distribution
- Creating visualizations for token usage and latency over time

1. Setup
To follow along with this tutorial, you’ll only need to install the following packages:

2. Implementation
2.1 Initializing Weave Client and Defining Costs
First, we’ll set up a function to initialize the Weave client and add costs for each model.

- We have included standard costs for many common models, but we also make it easy to add your own custom costs and custom models. In the following, we’ll show how to add custom costs for a few models and use the standard costs for the rest.
- The costs are calculated based on the tracked tokens for each call in Weave. For many LLM vendor libraries, token usage is tracked automatically, but it is also possible to return custom token counts for any call. See the custom cost cookbook for how to define the token count and cost calculation for a custom model.
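A minimal sketch of this setup is shown below. The model name and per-token rates are made-up placeholders, and the `add_cost` signature reflects the current Weave client API - verify both against your Weave version. The `weave` import is guarded so the sketch loads even where the package isn’t installed:

```python
# Hypothetical custom models and their per-token rates (USD) - placeholders only.
CUSTOM_COSTS = {
    "my-finetuned-model": {
        "prompt_token_cost": 0.000002,
        "completion_token_cost": 0.000004,
    },
}


def init_weave_client(project_name: str):
    """Initialize the Weave client and register custom per-token costs."""
    try:
        import weave
    except ImportError:  # weave not installed in this environment
        return None
    client = weave.init(project_name)
    for llm_id, rates in CUSTOM_COSTS.items():
        # Standard model costs are built in; this only adds the custom ones.
        client.add_cost(
            llm_id=llm_id,
            prompt_token_cost=rates["prompt_token_cost"],
            completion_token_cost=rates["completion_token_cost"],
        )
    return client


def call_cost(prompt_tokens: int, completion_tokens: int, rates: dict) -> float:
    """Cost of a single call, computed from its tracked token counts."""
    return (
        prompt_tokens * rates["prompt_token_cost"]
        + completion_tokens * rates["completion_token_cost"]
    )
```

With these rates, a call that used 1,000 prompt tokens and 500 completion tokens would be priced at `call_cost(1000, 500, CUSTOM_COSTS["my-finetuned-model"])`.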
2.2 Fetching Calls Data from Weave
In order to fetch call data from Weave, we have two options:

- Fetching data call-by-call
- Using high-level APIs
2.2.1 Fetching Data call-by-call
The first option for accessing data from Weave is to retrieve a list of filtered calls and extract the wanted data call-by-call. For that, we can use the `calls_query_stream` API to fetch the calls data from Weave:

- `calls_query_stream` API: fetches the calls data from Weave.
- `filter` dictionary: the filter parameters for the calls query - see here for more details.
- `expand_columns` list: the columns to expand in the calls data.
- `sort_by` list: the sorting parameters for the calls data.
- `include_costs` boolean: whether to include cost data for each call.
- `include_feedback` boolean: whether to include feedback for each call.
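Putting these parameters together, a fetch function might look like the sketch below. The request and filter class names and their module path (`CallsQueryReq`, `CallsFilter`, `SortBy` from `weave.trace_server.trace_server_interface`) are assumptions based on current Weave internals and may change between versions; the import is kept inside the function so the sketch loads without `weave` installed:

```python
def fetch_calls(client, project_id: str, limit: int = 100):
    """Stream the most recent root calls, including costs and feedback."""
    # Imported lazily so this sketch can be defined without weave installed.
    from weave.trace_server.trace_server_interface import (
        CallsFilter,
        CallsQueryReq,
        SortBy,
    )

    req = CallsQueryReq(
        project_id=project_id,                      # e.g. "my-entity/my-project"
        filter=CallsFilter(trace_roots_only=True),  # only top-level traces
        expand_columns=["inputs", "output"],
        sort_by=[SortBy(field="started_at", direction="desc")],
        limit=limit,
        include_costs=True,                         # attach cost data per call
        include_feedback=True,                      # attach feedback per call
    )
    return list(client.server.calls_query_stream(req))
```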
2.2.2 Using high-level APIs
Instead of going through every call, Weave also provides high-level APIs to directly access model costs, feedback, and other metrics. For example, for the costs, we’ll use the `query_costs` API to fetch the costs of all LLMs used in the project:
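A sketch of such a lookup is below. It assumes the client’s `query_costs` method and result fields `llm_id`, `prompt_token_cost`, and `completion_token_cost`; verify these against your Weave version:

```python
def fetch_costs(client):
    """Map each priced LLM in the project to its per-token rates."""
    costs = client.query_costs()  # one entry per LLM with a cost defined
    return {
        c.llm_id: {
            "prompt_token_cost": c.prompt_token_cost,
            "completion_token_cost": c.completion_token_cost,
        }
        for c in costs
    }
```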
2.4 Gathering inputs and generating visualizations
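This section turns the aggregated call data into charts. As a minimal sketch of a token-usage-over-time chart: the daily token counts below are made-up placeholder data, and the plotly import is guarded so the snippet loads even where plotly isn’t installed:

```python
from datetime import datetime, timedelta

# Hypothetical aggregated data: total tokens per day, as would be derived
# from the fetched calls - placeholder values for illustration.
token_usage = {
    datetime(2024, 1, 1) + timedelta(days=i): 1000 + 250 * i for i in range(7)
}


def token_usage_figure(usage: dict):
    """Bar chart of token usage over time; returns None if plotly is missing."""
    try:
        import plotly.graph_objects as go
    except ImportError:
        return None
    fig = go.Figure(go.Bar(x=list(usage.keys()), y=list(usage.values())))
    fig.update_layout(
        title="Token Usage Over Time",
        xaxis_title="Date",
        yaxis_title="Tokens",
    )
    return fig
```

The same pattern (aggregate per time bucket, then one `go.Figure` per metric) applies to latency, cost distribution, and feedback counts.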
Next, we can generate the visualizations using plotly. This is the most basic dashboard, but you can customize it as you like! For a more complex example, check out the Streamlit example here.

Conclusion
In this cookbook, we demonstrated how to create a custom production-monitoring dashboard using Weave’s APIs and functions. Weave currently focuses on fast integrations for easy input of data as well as extraction of the data for custom processes.

- Data Input:
  - Framework-agnostic tracing with the @weave.op() decorator and the possibility to import calls from CSV (see the related import cookbook)
  - Service API endpoints to log to Weave from various programming frameworks and languages - see here for more details.
- Data Output: